Bi-Level Actor-Critic for Multi-Agent Coordination


Similar articles

Multi-Agent Actor-Critic for Mixed Cooperative-Competitive Environments

We explore deep reinforcement learning methods for multi-agent domains. We begin by analyzing the difficulty of traditional algorithms in the multi-agent case: Q-learning is challenged by an inherent non-stationarity of the environment, while policy gradient suffers from a variance that increases as the number of agents grows. We then present an adaptation of actor-critic methods that considers...
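For reference, the centralized-critic gradient that this line of work builds on can be sketched, for agent $i$ with deterministic policy $\mu_i$ (parameters $\theta_i$), joint observation $x$, and a centralized action-value function $Q_i^{\mu}$, as

\[
\nabla_{\theta_i} J(\mu_i) = \mathbb{E}_{x, a \sim \mathcal{D}} \Big[ \nabla_{\theta_i} \mu_i(o_i) \, \nabla_{a_i} Q_i^{\mu}(x, a_1, \ldots, a_N) \big|_{a_i = \mu_i(o_i)} \Big],
\]

where $\mathcal{D}$ is a replay buffer of joint transitions. This is a sketch of the standard centralized-critic formulation; the notation may differ from the paper's own.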


A Bi-Level Linear Multi-Objective Decision Making Model with Interval Coefficients for Supply Chain Coordination

Bi-level programming, a tool for modeling decentralized decisions, consists of the objective(s) of the leader at the first level and that of the follower at the second level. Three-level programming results when the second level is itself a bi-level program. By extending this idea, it is possible to define multi-level programs with any number of levels. Supply chain planning problems are co...
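As a point of reference, a generic bi-level program of the kind described above can be written as follows; the objectives F and f, the variables x and y, and the feasible sets are generic placeholders rather than the paper's interval-coefficient model:

\[
\min_{x \in X} \; F(x, y^{*}) \qquad \text{(leader)}
\]
\[
\text{s.t.} \quad y^{*} \in \arg\min_{y \in Y(x)} \; f(x, y) \qquad \text{(follower)}
\]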



ACCNet: Actor-Coordinator-Critic Net for "Learning-to-Communicate" with Deep Multi-agent Reinforcement Learning

Communication is a critical factor for the big multi-agent world to stay organized and productive. Typically, most multi-agent “learning-to-communicate” studies try to predefine the communication protocols or use techniques such as tabular reinforcement learning and evolutionary algorithms, which cannot generalize to changing environments or large collections of agents. In this paper, we propos...


Hierarchical Actor-Critic

The ability to learn at different resolutions in time may help overcome one of the main challenges in deep reinforcement learning — sample efficiency. Hierarchical agents that operate at different levels of temporal abstraction can learn tasks more quickly because they can divide the work of learning behaviors among multiple policies and can also explore the environment at a higher level. In th...
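A minimal sketch of the two-level control loop this abstract describes, with hand-coded stand-ins in place of learned policies (the point-mass environment, the horizon k, and all function names are illustrative assumptions, not the paper's algorithm):

    import numpy as np

    rng = np.random.default_rng(0)

    def high_level_policy(state):
        # Hand-coded stand-in for a learned high-level policy:
        # propose a subgoal in the neighborhood of the current state.
        return 0.5 * state + rng.normal(scale=0.2, size=state.shape)

    def low_level_policy(state, subgoal):
        # Hand-coded stand-in for a learned goal-conditioned low-level policy:
        # take a small step toward the current subgoal.
        direction = subgoal - state
        return 0.2 * direction / (np.linalg.norm(direction) + 1e-8)

    def env_step(state, action):
        # Toy point-mass environment: the action displaces the state;
        # the reward encourages reaching the origin.
        next_state = state + action
        return next_state, -float(np.linalg.norm(next_state))

    state = rng.normal(size=2)
    k = 10  # the high level acts once every k low-level steps (temporal abstraction)
    ret = 0.0
    for t in range(100):
        if t % k == 0:
            subgoal = high_level_policy(state)          # coarse, high-level decision
        state, reward = env_step(state, low_level_policy(state, subgoal))
        ret += reward
    print(f"return of the hand-coded two-level hierarchy: {ret:.2f}")

The division of labor is the point: the high level only decides every k steps, while the low level handles the fine-grained actions toward each subgoal.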



Journal

Journal title: Proceedings of the AAAI Conference on Artificial Intelligence

Year: 2020

ISSN: 2374-3468, 2159-5399

DOI: 10.1609/aaai.v34i05.6226